Despite the recent success of multi-task learning and pre-finetuning for natural language understanding, few works have studied the effects of task families on abstractive text summarization. Task families are a form of task grouping during the pre-finetuning stage that lets a model learn common skills, such as reading comprehension. To close this gap, we analyze the influence of multi-task learning strategies using task families for English abstractive text summarization. We train models under one of three strategies, namely sequential, simultaneous, and continual multi-task learning, and evaluate them on two downstream tasks. We find that certain combinations of task families (e.g., advanced reading comprehension and natural language inference) positively impact downstream performance. Further, we find that the choice and combination of task families influence downstream performance more than the training scheme, supporting the use of task families for abstractive text summarization.
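A minimal sketch of the sequential and simultaneous schedules described above, using toy PyTorch modules; the task names, dimensions, and single-layer encoder are illustrative assumptions, not the paper's setup:

```python
import torch
import torch.nn as nn

# Hypothetical task families; the paper groups real NLU tasks this way.
FAMILIES = {
    "reading_comprehension": ["squad_like", "boolq_like"],
    "natural_language_inference": ["mnli_like"],
}

encoder = nn.Linear(16, 32)  # stand-in for a shared pre-trained encoder
heads = {t: nn.Linear(32, 2) for tasks in FAMILIES.values() for t in tasks}
params = list(encoder.parameters()) + [p for h in heads.values() for p in h.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def step(task):
    """One optimization step on a random toy batch for `task`."""
    x = torch.randn(8, 16)
    y = torch.randint(0, 2, (8,))
    loss = loss_fn(heads[task](encoder(x)), y)
    opt.zero_grad(); loss.backward(); opt.step()

# Sequential: finish one task family before moving on to the next.
for family, tasks in FAMILIES.items():
    for _ in range(100):
        for task in tasks:
            step(task)

# Simultaneous: interleave batches from all families in every round.
all_tasks = [t for tasks in FAMILIES.values() for t in tasks]
for _ in range(100):
    for task in all_tasks:
        step(task)
```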
The recent success of large language models for text generation poses a severe threat to academic integrity, as plagiarists can generate realistic paraphrases indistinguishable from original work. However, the role of large autoregressive transformers in generating machine-paraphrased plagiarism and their detection is still developing in the literature. This work explores T5 and GPT-3 for machine-paraphrase generation on scientific articles from arXiv, student theses, and Wikipedia. We evaluate the detection performance of six automated solutions and one commercial plagiarism detection software and perform a human study with 105 participants regarding their detection performance and the quality of generated examples. Our results suggest that large models can rewrite text humans have difficulty identifying as machine-paraphrased (53% mean acc.). Human experts rate the quality of paraphrases generated by GPT-3 as high as original texts (clarity 4.0/5, fluency 4.2/5, coherence 3.8/5). The best-performing detection model (GPT-3) achieves a 66% F1-score in detecting paraphrases.
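For concreteness, a hedged sketch of machine-paraphrase generation with a T5-style model via Hugging Face Transformers; the checkpoint name is a placeholder (a paraphrase-tuned checkpoint is assumed), and the paper's exact prompts, fine-tuning, and decoding settings are not reproduced here:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "t5-base"  # placeholder; assumes a paraphrase-capable checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

text = "Large language models can rewrite scientific prose convincingly."
inputs = tokenizer("paraphrase: " + text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=4, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```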
DBLP is the largest open-access repository of scientific articles on computer science and provides metadata related to publications, authors, and venues. We retrieved more than 6 million publications from DBLP and extracted pertinent metadata (e.g., abstracts, author affiliations, citations) from the publication texts to create the DBLP Discovery Dataset (D3). D3 can be used to identify trends in the research activity, productivity, bias, accessibility, and impact of computer science research. We present an initial analysis focused on the volume of computer science research (e.g., number of papers, authors, research activity), topics of interest, and citation patterns. Our findings show that computer science is a growing research field (approx. 15% annually) with an active and collaborative researcher community. Compared to previous decades, papers in recent years provide more bibliographic entries, yet the average number of citations they receive is still declining. Examining paper abstracts shows that recent topic trends are clearly reflected in D3. Finally, we list further applications of D3 and pose supplemental research questions. The D3 dataset, our findings, and source code are publicly available for research purposes.
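As an illustration of the kind of analysis D3 enables, a sketch that computes year-over-year publication growth; the file name and the `year` field are assumptions about the record layout, not the dataset's documented schema:

```python
import json
from collections import Counter

counts = Counter()
with open("d3_publications.jsonl") as f:  # hypothetical export of D3 records
    for line in f:
        record = json.loads(line)
        if record.get("year"):
            counts[int(record["year"])] += 1

for year in sorted(counts):
    prev = counts.get(year - 1)
    growth = (counts[year] - prev) / prev * 100 if prev else float("nan")
    print(f"{year}: {counts[year]} papers ({growth:+.1f}% vs. previous year)")
```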
A drastic rise in potentially life-threatening misinformation was a by-product of the COVID-19 pandemic. Computational support for identifying false information within the massive body of data on the topic is crucial to prevent harm. Researchers have proposed many methods for flagging online misinformation related to COVID-19. However, these methods predominantly target specific content types (e.g., news) or platforms (e.g., Twitter). How well such approaches generalize remains largely unclear. We evaluate fifteen Transformer-based models on five COVID-19 misinformation datasets that include social media posts, news articles, and scientific papers to fill this gap. We show that tokenizers and models tailored to COVID-19 data do not provide a significant advantage over general-purpose ones. Our study provides a realistic assessment of models for detecting COVID-19 misinformation. We expect that evaluating a broad spectrum of datasets and models will benefit future research in developing misinformation detection systems.
The rise of language models such as BERT allows for high-quality text paraphrasing. This is a problem for academic integrity, as it is hard to differentiate original from machine-generated content. We propose a benchmark consisting of articles paraphrased by recent language models relying on the Transformer architecture. Our contribution fosters future research on paraphrase detection systems, as it offers a large collection of aligned original and paraphrased documents, a study of their structure, and classification experiments with state-of-the-art systems; we make our findings publicly available.
Employing paraphrasing tools to conceal plagiarized text is a severe threat to academic integrity. To enable the detection of machine-paraphrased text, we evaluate the effectiveness of five pre-trained word embedding models combined with machine learning classifiers and state-of-the-art neural language models. We analyze preprints of research papers, graduation theses, and Wikipedia articles, which we paraphrase using the tools SpinBot and SpinnerChief. The best performing technique, Longformer, achieves an average F1 score of 80.99% (F1 = 99.68% for SpinBot and F1 = 71.64% for SpinnerChief cases), while human evaluators achieve F1 = 78.4% for SpinBot and F1 = 65.6% for SpinnerChief cases. We show that automated classification alleviates the shortcomings of widely used text-matching systems such as Turnitin and PlagScan. To facilitate future research, all data, code, and two web applications showcasing our contributions are openly available.
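A compact stand-in for the classification setup: document vectors plus a classical classifier. TF-IDF replaces the pre-trained word embeddings here purely to keep the sketch dependency-free, and the toy corpus is invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

originals = ["the model detects paraphrased passages",
             "results were reported in table two"]
spun = ["the framework spots rephrased sections",
        "outcomes were given in the second chart"]
X = originals + spun
y = [0, 0, 1, 1]  # 0 = original, 1 = machine-paraphrased

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(X, y)
print(clf.predict(["findings appear in the following chart"]))
```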
Machine learning models are typically evaluated by computing their similarity with reference annotations and trained by maximizing that similarity. Especially in the biomedical domain, annotations are subjective and suffer from low inter- and intra-rater reliability. Since annotations only reflect the annotating entity's interpretation of the real world, this can lead to sub-optimal predictions even though the model achieves high similarity scores. Here, the theoretical concept of Peak Ground Truth (PGT) is introduced. PGT marks the point beyond which an increase in similarity with the reference annotation stops translating to better Real World Model Performance (RWMP). Additionally, a quantitative technique to approximate PGT by computing inter- and intra-rater reliability is proposed. Finally, three categories of PGT-aware strategies to evaluate and improve model performance are reviewed.
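One way to quantify the rater reliability on which the PGT approximation relies is Cohen's kappa between two annotators; the label arrays below are toy data, and the paper's exact reliability measure may differ:

```python
from sklearn.metrics import cohen_kappa_score

# Invented binary annotations of the same eight samples by two raters.
rater_a = [1, 0, 1, 1, 0, 1, 0, 0]
rater_b = [1, 0, 0, 1, 0, 1, 1, 0]
print(f"inter-rater kappa: {cohen_kappa_score(rater_a, rater_b):.2f}")
```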
In modern business processes, the amount of data collected has increased substantially in recent years. Because this data can potentially yield valuable insights, automated knowledge extraction based on process mining has been proposed, among other techniques, to provide users with intuitive access to the information contained therein. At present, the majority of technologies aim to reconstruct explicit business process models. These are directly interpretable but limited with respect to integrating diverse and real-valued information sources. Machine Learning (ML), on the other hand, benefits from the vast amount of data available and can deal with high-dimensional sources, yet it has rarely been applied to process mining. In this contribution, we evaluate how well modern Transformer architectures, as well as more classical ML technologies, model process regularities, as quantified by their prediction capability. In addition, we demonstrate the capability of attentional properties and feature relevance determination to highlight the features that are crucial to the processes' predictive abilities. We demonstrate the efficacy of our approach on five benchmark datasets, showing that the ML models can predict critical outcomes and that the attention mechanisms or XAI components offer new insights into the underlying processes.
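A toy sketch of next-activity prediction with a Transformer encoder, as one possible instantiation of the setup above; the vocabulary size, dimensions, and random stand-in for an event log are illustrative assumptions:

```python
import torch
import torch.nn as nn

n_activities, d_model = 12, 32
embed = nn.Embedding(n_activities, d_model)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
    num_layers=2,
)
head = nn.Linear(d_model, n_activities)
opt = torch.optim.Adam(
    list(embed.parameters()) + list(encoder.parameters()) + list(head.parameters()),
    lr=1e-3,
)
loss_fn = nn.CrossEntropyLoss()

# Random stand-in for prefixes of process traces and their next activities.
prefixes = torch.randint(0, n_activities, (64, 10))
next_acts = torch.randint(0, n_activities, (64,))

for _ in range(50):
    h = encoder(embed(prefixes))   # (batch, seq, d_model)
    logits = head(h[:, -1, :])     # predict the next activity from the last step
    loss = loss_fn(logits, next_acts)
    opt.zero_grad(); loss.backward(); opt.step()
```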
Optimizing combinatorial structures is at the core of many real-world problems, such as those encountered in the life sciences. For example, one of the crucial steps in antibody design is finding an arrangement of amino acids in a protein sequence that improves its binding to a pathogen. The combinatorial optimization of antibodies is difficult because of the extremely large search space and non-linear objectives. Even for a modest antibody design problem, with a protein sequence length of eleven, we face a search over more than 2.05 x 10^14 structures. Applying traditional reinforcement learning algorithms such as Q-learning to combinatorial optimization results in poor performance. We propose Structured Q-learning (SQL), an extension of Q-learning that incorporates structural priors for combinatorial optimization. Using a molecular docking simulator, we demonstrate that SQL finds high-binding-energy sequences and performs well against baselines on eight challenging antibody design tasks, including designing antibodies against SARS-CoV.
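To make the search problem concrete, a toy tabular Q-learning baseline for sequence design; it does not implement SQL's structural priors, and the alphabet, length, and reward are invented stand-ins for a docking score:

```python
import random

ALPHABET = list("ACDEFGHIKLMNPQRSTVWY")  # the 20 standard amino acids
LENGTH = 5                                # far shorter than real antibodies

def reward(seq):
    # Stand-in for a docking simulator: favor a fixed target sequence.
    return sum(a == b for a, b in zip(seq, "ACDEF"))

Q = {}  # (position, amino_acid) -> value estimate
alpha, eps = 0.1, 0.2
for _ in range(5000):
    seq = []
    for pos in range(LENGTH):
        if random.random() < eps:
            aa = random.choice(ALPHABET)          # explore
        else:
            aa = max(ALPHABET, key=lambda a: Q.get((pos, a), 0.0))  # exploit
        seq.append(aa)
    r = reward(seq)
    for pos, aa in enumerate(seq):
        q = Q.get((pos, aa), 0.0)
        Q[(pos, aa)] = q + alpha * (r - q)  # bandit-style update per position

best = "".join(max(ALPHABET, key=lambda a: Q.get((pos, a), 0.0))
               for pos in range(LENGTH))
print("best sequence found:", best)
```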
Classifying samples in incomplete datasets is a common but non-trivial goal for machine learning practitioners. Missing data are found in most real-world datasets, and these missing values are typically imputed using established methods before the now-complete, imputed samples are classified. The machine learning researcher's focus is then on optimizing downstream classification performance. In this study, we stress that the quality of the imputation must also be considered. We show how the measures commonly used to assess imputation quality are flawed, and we propose a new class of discrepancy scores that focus on how well a method recreates the overall distribution of the data. Finally, we highlight the compromised interpretability of classifier models trained on poorly imputed data.
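In the spirit of the proposed distribution-level discrepancy scores, a sketch comparing true and mean-imputed values with a Kolmogorov-Smirnov statistic; the data and the choice of KS are illustrative, not the paper's actual scores:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
truth = rng.normal(loc=2.0, scale=1.5, size=500)   # complete ground truth
observed = truth.copy()
observed[rng.random(500) < 0.3] = np.nan           # 30% missing at random

imputed = observed.copy()
imputed[np.isnan(imputed)] = np.nanmean(observed)  # naive mean imputation

stat, _ = ks_2samp(truth, imputed)
print(f"KS distance between true and imputed distributions: {stat:.3f}")
# Mean imputation collapses variance, so the distributional gap is visible
# even when per-value errors look small.
```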